
    A constructive mean field analysis of multi population neural networks with random synaptic weights and stochastic inputs

    We deal with the problem of bridging the gap between two scales in neuronal modeling. At the first (microscopic) scale, neurons are considered individually and their behavior described by stochastic differential equations that govern the time variations of their membrane potentials. They are coupled by synaptic connections acting on their resulting activity, a nonlinear function of their membrane potential. At the second (mesoscopic) scale, interacting populations of neurons are described individually by similar equations. The equations describing the dynamical and the stationary mean field behaviors are considered as functional equations on a set of stochastic processes. Using this new point of view allows us to prove that these equations are well-posed on any finite time interval and to provide a constructive method for effectively computing their unique solution. This method is proved to converge to the unique solution and we characterize its complexity and convergence rate. We also provide partial results for the stationary problem on infinite time intervals. These results shed some new light on such neural mass models as the one of Jansen and Rit \cite{jansen-rit:95}: their dynamics appears as a coarse approximation of the much richer dynamics that emerges from our analysis. Our numerical experiments confirm that the framework we propose and the numerical methods we derive from it provide a new and powerful tool for the exploration of neural behaviors at different scales. Comment: 55 pages, 4 figures, to appear in "Frontiers in Neuroscience".
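
    The constructive method can be pictured, in a drastically simplified one-population setting, as a fixed-point iteration on the law of a Gaussian process. The sketch below is only a caricature of that idea (the paper treats multiple populations and the full covariance structure): the functional forms, constants, and diagonal (variance-only) approximation are illustrative assumptions, not the paper's equations.

```python
import numpy as np

# Caricature of the fixed-point iteration (one population, diagonal
# variance): given the current Gaussian law (m, v) of the potential over
# [0, T], compute the network drive it induces, integrate the resulting
# moment equations, and repeat until the law stops changing.
# All constants are hypothetical placeholders.
def mean_field_fixed_point(T=200, dt=0.01, J_mean=0.8, J_var=0.2,
                           sigma=0.3, leak=1.0, n_iter=40, n_mc=4000, seed=0):
    rng = np.random.default_rng(seed)
    m = np.zeros(T)              # mean of the membrane potential process
    v = np.full(T, sigma**2)     # variance (diagonal approximation)
    for _ in range(n_iter):
        # Gaussian samples of the potential under the current law,
        # pushed through a sigmoidal rate function.
        x = m + np.sqrt(v) * rng.standard_normal((n_mc, T))
        rate = np.tanh(x)
        m_new = np.zeros(T)
        v_new = np.full(T, sigma**2)
        for t in range(1, T):
            m_new[t] = m_new[t-1] + dt * (-leak * m_new[t-1]
                                          + J_mean * rate[:, t-1].mean())
            v_new[t] = v_new[t-1] + dt * (-2 * leak * v_new[t-1]
                                          + J_var * rate[:, t-1].var()
                                          + sigma**2)
        m, v = m_new, v_new
    return m, v
```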

    Dynamics and spike trains statistics in conductance-based Integrate-and-Fire neural networks with chemical and electric synapses

    We investigate the effect of electric synapses (gap junctions) on collective neuronal dynamics and spike statistics in a conductance-based Integrate-and-Fire neural network, driven by Brownian noise, where conductances depend upon spike history. We compute explicitly the time evolution operator and show that, given the spike history of the network and the membrane potentials at a given time, the further dynamical evolution can be written in closed form. We show that spike train statistics are described by a Gibbs distribution whose potential can be approximated with an explicit formula when the noise is weak. This potential form encompasses existing models for spike train statistics analysis, such as maximum entropy models or Generalized Linear Models (GLM). We also discuss the different types of correlations: those induced by a shared stimulus and those induced by neuron interactions. Comment: 42 pages, 1 figure, submitted.
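
    As a rough illustration of this model class (not the paper's exact equations), the sketch below simulates a discrete-time leaky Integrate-and-Fire network in which chemical synapses act through an exponentially filtered spike-history trace and gap junctions add a diffusive coupling between membrane potentials. All kernels and parameter values are placeholders.

```python
import numpy as np

# Discrete-time leaky IF network with chemical synapses (acting through an
# exponentially filtered spike-history trace) and gap junctions (diffusive
# coupling between potentials). Kernels and parameters are placeholders.
def simulate_if_network(N=10, T=1000, dt=0.1, theta=1.0, tau=10.0,
                        tau_syn=5.0, sigma=0.2, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, 0.3, (N, N))             # chemical synaptic weights
    G = np.abs(rng.normal(0.0, 0.05, (N, N)))    # gap-junction conductances
    np.fill_diagonal(G, 0.0)
    V = np.zeros(N)
    trace = np.zeros(N)                          # spike-history trace
    spikes = np.zeros((T, N), dtype=int)
    for t in range(T):
        chem = W @ trace                         # chemical synaptic drive
        elec = G @ V - G.sum(axis=1) * V         # sum_j g_ij (V_j - V_i)
        V += dt / tau * (-V + chem + elec)
        V += sigma * np.sqrt(dt) * rng.standard_normal(N)
        fired = V >= theta
        spikes[t] = fired
        V[fired] = 0.0                           # reset at threshold
        trace = trace * np.exp(-dt / tau_syn) + fired
    return spikes
```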

    Exact computation of the Maximum Entropy Potential of spiking neural network models

    Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The Maximum Entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. However, despite good performance in terms of prediction, the fitted parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuro-mimetic models) provide a probabilistic mapping between stimulus, network architecture, and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuro-mimetic and Maximum Entropy models. Comment: arXiv admin note: text overlap with arXiv:1309.587
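
    A flavour of such a mapping, under assumptions of our own: if the log probability of a spatial spike pattern decomposes as a sum of coefficients over the subsets of active neurons, those Maximum Entropy coefficients can be recovered by Möbius (inclusion-exclusion) inversion. The sketch below implements this inversion for a small number of neurons; it illustrates the principle, not the paper's exact construction.

```python
import numpy as np
from itertools import product, combinations

# Moebius (inclusion-exclusion) inversion: if log P(pattern) decomposes as
# a sum of coefficients h_A over subsets A of the active neurons, each h_A
# is an alternating sum of log P over the sub-patterns of A. Illustration
# of the principle for spatial patterns over N neurons.
def maxent_coefficients(log_p, N):
    """log_p: dict mapping each binary tuple of length N to log P(pattern)."""
    coeffs = {}
    for pattern in product([0, 1], repeat=N):
        ones = [i for i, s in enumerate(pattern) if s]
        h = 0.0
        for k in range(len(ones) + 1):
            for sub in combinations(ones, k):
                sub_pattern = tuple(1 if i in sub else 0 for i in range(N))
                h += (-1) ** (len(ones) - k) * log_p[sub_pattern]
        coeffs[tuple(ones)] = h      # coefficient of the monomial over A
    return coeffs
```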

    Parameters estimation for spatio-temporal maximum entropy distributions: application to neural spike trains

    We propose a numerical method to learn Maximum Entropy (MaxEnt) distributions with spatio-temporal constraints from experimental spike trains. This extends two earlier papers, [10] and [4], which proposed the estimation of parameters where only spatial constraints were taken into account. The extension we propose makes it possible to properly handle memory effects in spike statistics for large neural networks. Comment: 34 pages, 33 figures
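
    For a toy instance the fitting loop can be made fully explicit: with a handful of neurons and memory depth one, model averages of the spatio-temporal constraints are exactly computable via a transfer matrix, and parameters are adjusted by gradient ascent until model averages match the empirical ones. Everything below (feature choice, constants) is an illustrative assumption, not the paper's algorithm, which targets large networks.

```python
import numpy as np
from itertools import product

# Toy exact version of the fitting loop (tiny N, memory depth 1).
# Features: single spikes plus lag-1 pairwise products -- an illustrative
# choice, not the paper's constraint set.
def features(prev, cur, N):
    f = list(cur)
    f += [prev[i] * cur[j] for i in range(N) for j in range(N)]
    return np.array(f, dtype=float)

def model_averages(h, N):
    states = list(product([0, 1], repeat=N))
    F = np.array([[features(a, b, N) for b in states] for a in states])
    L = np.exp(F @ h)                       # transfer matrix
    # Perron left/right eigenvectors give the stationary pair measure.
    vals, vecs = np.linalg.eig(L)
    r = np.abs(vecs[:, np.argmax(vals.real)].real)
    vals_t, vecs_t = np.linalg.eig(L.T)
    l = np.abs(vecs_t[:, np.argmax(vals_t.real)].real)
    P = l[:, None] * L * r[None, :]
    P /= P.sum()
    return np.tensordot(P, F, axes=([0, 1], [0, 1]))

def fit_maxent(empirical_averages, N, lr=0.2, n_iter=3000):
    h = np.zeros_like(empirical_averages)   # length N + N*N
    for _ in range(n_iter):
        h += lr * (empirical_averages - model_averages(h, N))
    return h
```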

    Linear response for spiking neuronal networks with unbounded memory

    We establish a general linear response relation for spiking neuronal networks, based on chains with unbounded memory. This relation allows us to predict the influence of weak-amplitude, time-dependent external stimuli on spatio-temporal spike correlations from the spontaneous statistics (without stimulus), in a general context where the memory in spike dynamics can extend arbitrarily far in the past. Using this approach, we show how linear response is explicitly related to neuronal dynamics with an example, the gIF model introduced by M. Rudolph and A. Destexhe. This example illustrates the collective effect of the stimuli, intrinsic neuronal dynamics, and network connectivity on spike statistics. We illustrate our results with numerical simulations. Comment: 60 pages, 8 figures
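
    Schematically, a linear response relation of this kind predicts the first-order change of an observable's average as a causal convolution of the weak stimulus with a kernel built from spontaneous correlations. The sketch below only illustrates that convolution structure; the kernel itself, in the paper, is derived from the Gibbs distribution of the spontaneous dynamics.

```python
import numpy as np

# First-order (linear response) prediction: the change of an observable's
# average under a weak stimulus is a causal convolution of the stimulus
# with a kernel derived from spontaneous correlations. Both inputs here
# are placeholders for quantities computed from the spontaneous statistics.
def linear_response(kernel, stimulus):
    """kernel[k]: response kernel at lag k >= 0, from spontaneous statistics;
    stimulus[t]: weak time-dependent input. Returns delta<f>(t) at first order."""
    T, K = len(stimulus), len(kernel)
    delta = np.zeros(T)
    for t in range(T):
        for k in range(min(K, t + 1)):
            delta[t] += kernel[k] * stimulus[t - k]   # causal convolution
    return delta
```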

    Spatio-temporal spike trains analysis for large scale networks using maximum entropy principle and Monte-Carlo method

    Understanding the dynamics of neural networks is a major challenge in experimental neuroscience. For that purpose, a modelling of the recorded activity that reproduces the main statistics of the data is required. In the first part, we present a review of recent results dealing with spike train statistics analysis using maximum entropy models (MaxEnt). Most of these studies have focused on modelling synchronous spike patterns, leaving aside the temporal dynamics of the neural activity. However, the maximum entropy principle can be generalized to the temporal case, leading to Markovian models where memory effects and time correlations in the dynamics are properly taken into account. In the second part, we present a new method based on Monte-Carlo sampling that is suited to fitting large-scale spatio-temporal MaxEnt models. The formalism and the tools presented here will be essential for fitting MaxEnt spatio-temporal models to large neural ensembles. Comment: 41 pages, 10 figures
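
    A minimal sketch of the Monte-Carlo ingredient, under assumptions of our own: single-bin Metropolis flips over an entire space-time raster, with acceptance governed by the change in a spatio-temporal Gibbs potential. Recomputing the full potential at each proposal, as done here for clarity, is deliberately naive; the paper's method is tailored to large-scale models.

```python
import numpy as np

# Naive Metropolis sampler over a whole space-time spike raster. `potential`
# is any user-supplied spatio-temporal Gibbs potential H(raster); the chain
# targets P(raster) proportional to exp(H(raster)).
def metropolis_raster(potential, N, T, n_sweeps=50, seed=0):
    rng = np.random.default_rng(seed)
    raster = rng.integers(0, 2, size=(N, T))
    H = potential(raster)
    for _ in range(n_sweeps * N * T):
        i, t = rng.integers(N), rng.integers(T)
        raster[i, t] ^= 1                          # propose one bit flip
        H_new = potential(raster)                  # naive full recomputation
        if rng.random() < np.exp(min(0.0, H_new - H)):
            H = H_new                              # accept
        else:
            raster[i, t] ^= 1                      # reject: undo the flip
    return raster
```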

    A mathematical analysis of the effects of Hebbian learning rules on the dynamics and structure of discrete-time random recurrent neural networks

    We present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks, with a generic Hebbian learning rule including passive forgetting and different time scales for neuronal activity and learning dynamics. Previous numerical work has reported that Hebbian learning drives the system from chaos to a steady state through a sequence of bifurcations. Here, we interpret these results mathematically and show that these effects, involving a complex coupling between neuronal dynamics and synaptic graph structure, can be analyzed using Jacobian matrices, which introduce both a structural and a dynamical point of view on the neural network evolution. Furthermore, we show that the sensitivity to a learned pattern is maximal when the largest Lyapunov exponent is close to 0. We discuss how neural networks may take advantage of this regime of high functional interest.
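
    The two ingredients of the analysis, a Hebbian rule with passive forgetting and the Jacobian-based view of the dynamics, can be sketched in a few lines. The code below simulates a discrete-time random recurrent network and estimates its largest Lyapunov exponent from products of Jacobian matrices; constants are placeholders, and the tangent dynamics treats the slowly learned weights as frozen at each step.

```python
import numpy as np

# Discrete-time random recurrent network with a Hebbian rule including
# passive forgetting (lam < 1), and a largest-Lyapunov-exponent estimate
# from Jacobian products. Constants (g, eps, lam) are placeholders.
def hebbian_lyapunov(N=100, T=2000, g=3.0, eps=1e-3, lam=0.995, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(0.0, g / np.sqrt(N), (N, N))
    x = rng.standard_normal(N)
    u = rng.standard_normal(N)
    u /= np.linalg.norm(u)
    log_growth = 0.0
    for _ in range(T):
        a = np.tanh(W @ x)                    # next activity state
        J = (1.0 - a**2)[:, None] * W         # Jacobian diag(1 - a^2) W
        u = J @ u
        nrm = np.linalg.norm(u)
        log_growth += np.log(nrm)
        u /= nrm
        W = lam * W + (eps / N) * np.outer(a, a)  # Hebb + passive forgetting
        x = a
    return log_growth / T                     # largest Lyapunov exponent
```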

    Spike train statistics in Integrate and Fire Models: exact results

    We briefly review and highlight the consequences of rigorous and exact results obtained in \cite{cessac:10}, characterizing the statistics of spike trains in a network of leaky Integrate-and-Fire neurons, where time is discrete and where neurons are subject to noise, without restriction on the synaptic weight connectivity. The main result is that spike train statistics are characterized by a Gibbs distribution whose potential is explicitly computable. This establishes, on one hand, a rigorous ground for the current investigations attempting to characterize real spike train data with Gibbs distributions, such as the Ising-like distribution, using the maximum entropy principle. However, it transpires from the present analysis that the Ising model might be a rather weak approximation. Indeed, the Gibbs potential (the formal "Hamiltonian") is the log of the so-called "conditional intensity" (the probability that a neuron fires given the past of the whole network). But, in the present example, this probability has infinite memory, and the corresponding process is non-Markovian (resp. the Gibbs potential has infinite range). Moreover, causality implies that the conditional intensity does not depend on the state of the neurons at the \textit{same time}, ruling out the Ising model as a candidate for an exact characterization of spike train statistics. However, Markovian approximations can be proposed whose degree of approximation can be rigorously controlled. In this setting, the Ising model appears as the "next step" after the Bernoulli model (independent neurons), since it introduces spatial pairwise correlations but not time correlations. The range of validity of this approximation is discussed, together with possible approaches for introducing time correlations, with algorithmic extensions. Comment: 6 pages, submitted to conference NeuroComp2010 http://2010.neurocomp.fr/; Bruno Cessac http://www-sop.inria.fr/neuromathcomp
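
    The central object, the conditional intensity whose logarithm gives the Gibbs potential, can be made concrete in a simplified discrete-time LIF setting: with Gaussian noise integrated since the neuron's last reset, the firing probability given the whole network history is a Gaussian tail at threshold. The drive computed below is a simplified stand-in for the paper's exact formula.

```python
import numpy as np
from math import erf, sqrt

# Simplified discrete-time LIF "conditional intensity": probability that
# neuron k fires at time n given the whole network's spike history. The
# potential integrates synaptic input and Gaussian noise since k's last
# reset; its Gaussian tail at threshold gives the firing probability,
# whose log plays the role of the Gibbs potential.
def conditional_intensity(W, spikes, k, n, theta=1.0, gamma=0.9, sigma=0.2):
    """spikes: (T, N) binary array; W: (N, N) synaptic weights."""
    past = np.flatnonzero(spikes[:n, k])
    tau = past[-1] + 1 if past.size else 0     # time of k's last reset
    drive, var = 0.0, 0.0
    for t in range(tau, n):
        drive = gamma * drive + W[k] @ spikes[t]   # leaky input integration
        var = gamma**2 * var + sigma**2            # accumulated noise variance
    z = (theta - drive) / sqrt(max(var, 1e-12))
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))        # P(V_k(n) >= theta)
```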

    Stochastic firing rate models

    We review a recent approach to the mean-field limits in neural networks that takes into account the stochastic nature of the input current and the uncertainty in synaptic coupling. This approach was proved to be a rigorous limit of the network equations in a general setting, and we express here the results in a more customary and simpler framework. We propose a heuristic argument to derive these equations, providing a more intuitive understanding of their origin. These equations are characterized by a strong coupling between the different moments of the solutions. We analyse these equations, present an algorithm to simulate their solutions, and investigate them numerically. In particular, we build a bridge between these equations and the approach of Sompolinsky and collaborators (1988, 1990), and show how the coupling between the mean and the covariance function deviates from customary approaches.
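
    The moment coupling can be illustrated, in a one-population reduction of our own, by integrating the mean and variance jointly, with Gaussian averages of the firing-rate function evaluated by Gauss-Hermite quadrature. Coefficients and functional forms below are placeholders chosen for illustration, not the paper's mean-field equations.

```python
import numpy as np

# One-population reduction of coupled moment equations: the mean m and
# variance v of the activity drive each other through Gaussian averages of
# the rate function, computed by Gauss-Hermite quadrature.
def moment_dynamics(T=500, dt=0.05, J=1.2, J_var=0.8, sigma=0.3, leak=1.0):
    xs, ws = np.polynomial.hermite_e.hermegauss(41)
    ws = ws / ws.sum()                       # weights for E[.] under N(0, 1)
    m, v = 0.1, 0.0
    traj = []
    for _ in range(T):
        phi = np.tanh(m + np.sqrt(max(v, 0.0)) * xs)
        e1, e2 = ws @ phi, ws @ phi**2       # E[phi], E[phi^2]
        m += dt * (-leak * m + J * e1)       # mean equation
        v += dt * (-2.0 * leak * v + J_var * e2 + sigma**2)  # variance equation
        traj.append((m, v))
    return np.array(traj)
```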